Current Research in Neurobiology
Elsevier BV
Preprints posted in the last 90 days, ranked by how well they match Current Research in Neurobiology's content profile, based on 14 papers previously published here. The average preprint has a 0.00% match score for this journal, so anything above that is already an above-average fit.
Gastaldon, S.; Gheller, F.; Bonfiglio, N.; Brotto, D.; Bottari, D.; Trevisi, P.; Martini, A.; Vespignani, F.; Peressotti, F.
This study provides the first neurophysiological evidence of how cochlear implant (CI) input affects predictive processing during audiovisual language comprehension in deaf individuals. Using EEG, we compared 18 CI users with 18 normal-hearing (NH) controls during sentence comprehension where final word predictability was determined by the high or low semantic constraint (HC vs. LC) of the preceding sentence frame. Between the sentence frame and the final word, an 800 ms silent gap was introduced. Mouth visibility was manipulated during sentence frames (visible or digitally occluded; V+ vs. V-), while the final words were always presented with the mouth visible. In NH participants, lower-beta power (12-15 Hz) in left frontal and central sensors decreased for HC vs. LC contexts during the pre-target silent gap, but only when the mouth was visible, suggesting active prediction generation. In CI users, this lower-beta power decrease was absent. After final word presentation, both groups showed N400 predictability effects, indicating preserved prediction evaluation. However, CI users exhibited extended N400 effects in the V+ condition, suggesting additional processing demands. Across all participants, pre-target beta modulations correlated with language production abilities, supporting prediction-by-production frameworks. Within CI users, poorer audiometric thresholds correlated with larger N400 constraint effects, possibly indicating greater reliance on contextual prediction to compensate for degraded sensory input. These findings demonstrate that CI-mediated perception alters the neural mechanisms of prediction generation. The link between production skills and predictive mechanisms suggests that strengthening expressive language abilities may enhance predictive processing in CI users.
O'Connor, S. A.; Narain, P.; Mahajan, A.; Bancroft, G. L.; Haas, H. A.; Wallen-Friedman, E.; Vasisht, S.; Takano, H.; Kiffer, F. C.; Eisch, A. J.; Yun, S.
Environmental stressors rarely affect just one brain circuit. Most studies assess single cognitive endpoints, obscuring whether vulnerabilities are global or circuit-selective and how effects distribute across interconnected systems. To address this, we used galactic cosmic radiation (GCR), a Mars mission-relevant stressor that disrupts the hippocampal-nucleus accumbens-prefrontal circuit. C57BL/6J mice received 33-ion GCR simulation (33-GCR, 0.75 Gy) or sham radiation with the Nrf2-activating compound CDDO-EA or vehicle, followed by multi-domain behavioral testing in both sexes. Under very high memory load, male Veh/33-GCR mice showed enhanced pattern separation compared to Veh/Sham males, an effect normalized by CDDO-EA. Female mice showed no radiation-induced changes in pattern separation but weighed 9-18% more than Veh/Sham females and had reduced locomotor activity. Reward-based learning differed by sex: males showed no changes, while female Veh/33-GCR mice displayed enhanced reward anticipation that was further increased by CDDO-EA alone, with both treatments contributing to elevated goal-tracking. For behavioral flexibility, CDDO-EA impaired reversal learning in males regardless of radiation, while 33-GCR impaired reversal learning in females regardless of CDDO-EA. Principal component analysis revealed that treatments disrupted specific circuit relationships while leaving others intact, consistent with selective rather than global cognitive effects. Fiber photometry showed enhanced dentate gyrus encoding activity in irradiated males under high memory load. Combined CDDO-EA/33-GCR selectively reduced dentate gyrus progenitors in females. Males and females showed distinct, circuit-selective vulnerability patterns, demonstrating that multi-domain, both-sex assessment is necessary to capture how stressors and interventions affect integrated brain function. 
CDDO-EA proved to be a double-edged sword: protecting one cognitive domain while impairing another, a trade-off invisible to single-endpoint assessment. This framework has immediate relevance for astronaut risk assessment and extends to any context where neuroprotective interventions are evaluated against environmental stressors.
Riegel, J.; Schüller, A.; Wissmann, A.; Zeiler, S.; Kolossa, D.; Reichenbach, T.
Seeing a speaker's face can significantly aid understanding, particularly in challenging acoustic environments. An early neural response implicated in audiovisual speech processing is the frequency-following response (speech-FFR), which occurs at the fundamental frequency of the speech signal. This response arises from both subcortical areas and the auditory cortex. Previous studies have shown that subcortical responses are reduced when bimodal stimulation includes visual input from the talker's face. Here, we examined the cortical contribution to the speech-FFR and its potential modulation by visual information. We recorded MEG responses to four types of audiovisual signals: a still image, an artificially generated avatar, a degraded video, and a natural video. The audio stimuli were presented in a substantial level of background noise to make behavioral audiovisual effects stand out. Speech-in-noise comprehension increased significantly from the audio-only condition to the avatar and the degraded video, and further to the natural video. Moreover, we found that all types of audiovisual stimuli yielded robust speech-FFRs in the auditory cortex at an early latency of around 30 ms. However, the magnitude of this neural response was neither enhanced nor attenuated by the videos, nor could the cortical contribution to the speech-FFR explain a significant portion of the variance in the behavioral comprehension scores. Our results suggest that visual modulation of the speech-FFR in the auditory cortex is, if it exists, too small to be measurable in scenarios where speech occurs in considerable background noise.
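The speech-FFR magnitude at the fundamental frequency can be illustrated with a minimal single-bin discrete Fourier transform: project a response onto a complex exponential at f0 and take the magnitude. This is a toy sketch only; the sampling rate, f0, and signal below are hypothetical, not the study's MEG pipeline.

```python
# Toy sketch: quantify the strength of a response at a target frequency
# (e.g. the speech fundamental, f0) via a single-bin DFT projection.
# All numbers are hypothetical illustrations.
import cmath
import math

def power_at(signal, freq, fs):
    """Magnitude of the single-frequency DFT of `signal` sampled at `fs` Hz."""
    n = len(signal)
    coef = sum(s * cmath.exp(-2j * math.pi * freq * t / fs)
               for t, s in enumerate(signal)) / n
    return abs(coef)

fs, f0 = 1000, 100                          # hypothetical sampling rate and f0
sig = [math.sin(2 * math.pi * f0 * t / fs) for t in range(1000)]

print(power_at(sig, f0, fs))                # strong component at f0
print(power_at(sig, 37, fs))                # little energy away from f0
```

In practice the FFR magnitude would be compared across conditions (still image, avatar, degraded video, natural video), which is what the abstract reports as unchanged.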
Zheng, Y.; Chen, L.
Perceptual processing integrates information from multiple sensory modalities to form a coherent representation of the environment. A classic example is the Sound-Induced Flash Illusion (SIFI), where the perceived number of visual flashes is altered by conflicting auditory stimuli. While the SIFI is a well-established phenomenon of multisensory integration, the influence of physical spatial characteristics (specifically stimulus eccentricity and spatial congruence) on integration levels remains debated. To address this gap, this study used the SIFI paradigm to investigate the effects of visual stimulus location and of the spatial congruence between auditory and visual stimuli on audiovisual integration. In Experiments 1 and 2, we found that when spatial attention was controlled via cueing, unimodal visual performance remained consistent across locations. However, susceptibility to the SIFI increased progressively from the central to the peripheral visual field, exhibiting a Gaussian spatial pattern. Bayesian modeling further showed that this spatial modulation was driven by an increase in the integration weight assigned to audiovisual representations in the periphery, rather than by changes in sensory uncertainty alone. Conversely, Experiment 3 demonstrated that the spatial congruence of audiovisual stimuli did not affect the SIFI or alter integration processing. These findings refine our current understanding of the spatial modulation of audiovisual integration. By incorporating the visual system's spatial properties into a Bayesian framework, we provide a computational explanation for the eccentricity-dependent nature of multisensory integration.
Abraham, I.; Ajmera, S.; Zhang, W.; Leaver, A. M.; Sutton, B. P.; Peelle, J. E.; Husain, F. T.
The impact of age and hearing loss on the brain has garnered significant attention, as both factors have been implicated in the development of cognitive impairment or dementia. In this study, we investigated the impact of hearing loss and tinnitus on gray matter in the brain, while accounting for age. We conducted a comprehensive secondary analysis of structural MRI data obtained from multiple research sites (256 unique individuals) using voxel-based and surface-based morphometry. After harmonization of this multi-site brain data, our analysis replicated the previously reported age-related decline in total cortical volume, but there was no significant effect of either hearing loss or tinnitus on total cortical volume. When a region-of-interest analysis was conducted, the hippocampus emerged as the only brain region that showed a direct impact of hearing loss after accounting for variance associated with age. This effect on hippocampal volume was evident in our sample from age 52 years onwards; when adjusted for hearing loss, the decline began at age 56 years. For the presence of tinnitus, the ventral posterior cingulate gyrus showed main effects on cortical volume and surface area, while the medial occipito-temporal gyrus and the operculum of the inferior frontal gyrus showed significant main effects only on surface area. Post-hoc analysis revealed that the posterior cingulate gyrus showed significantly higher volume and larger surface area in individuals with tinnitus compared to those without. Similarly, medial occipito-temporal gyrus surface area was increased, whereas surface area of the inferior frontal opercular gyrus was reduced, in those with tinnitus compared to those without.
Notably, while past studies have reported that the presence of tinnitus appeared to moderate some of these effects in certain participant groups, our results suggest a more complex relationship between sensory degradation, chronic tinnitus, and brain structure across the adult lifespan.
Highlights:
- Hearing loss and tinnitus can exacerbate regional brain atrophy in the adult lifespan.
- High-frequency hearing loss affects auditory cortex gray matter volume to a larger degree in older age.
- Hearing loss may accelerate decline in hippocampal volume by about 4 years.
- Chronic subjective tinnitus is associated with a larger volume of cingulate cortex, increased surface area in cingulate cortex and the lingual gyrus, and decreased surface area of frontal operculum compared to controls.
- Tinnitus-related effects on regional brain atrophy are not modified by the degree of hearing deficits.
Keshavarzi, M.; Moore, B. C. J.; Goswami, U.
Neural oscillations in the delta (0.5-4 Hz) and theta (4-8 Hz) bands play a key role in tracking the temporal structure of speech. According to Temporal Sampling (TS) theory, dyslexia arises from atypical entrainment of these low-frequency oscillations to speech during infancy and childhood, which is particularly disruptive for phonological encoding. However, studies of adults with dyslexia have rarely examined both delta and theta cortical tracking under naturalistic listening conditions, and have not measured delta-band cortical tracking. Using EEG, here we focused on delta- and theta-band cortical tracking of continuous natural speech by adults with and without dyslexia, applying a decoding analysis previously used with dyslexic children. Forty-eight English-speaking adults (24 dyslexic, 24 control) listened to a 16-minute continuous spoken narrative while EEG was recorded. Neural decoding of the speech envelope was quantified using backward multivariate Temporal Response Function (mTRF) models applied at two levels: a between-group analysis evaluating group-level differences in neural representation patterns, and a within-participant analysis assessing individual decoding accuracy. Cerebro-acoustic coherence was computed in parallel to provide a complementary measure of neural-speech synchronisation. Additional analyses examined band power, cross-frequency phase-amplitude coupling (PAC), and cross-frequency phase-phase coupling (PPC). Dyslexic adults exhibited less accurate delta- and theta-band decoding in the between-group analysis and reduced theta-band decoding accuracy in the within-participant analysis, alongside reduced coherence in both bands and increased delta-band power, particularly over the right temporal region. No group differences were found for PAC or PPC.
Highlights:
- Adults with dyslexia showed reduced delta- and theta-band speech decoding.
- Cerebro-acoustic coherence was reduced in delta and theta bands in the dyslexia group.
- Delta-band power was increased in dyslexia, especially over the right temporal region.
- Cross-frequency coupling did not differ between adults with and without dyslexia.
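The backward (stimulus-reconstruction) logic of an mTRF decoder can be sketched with toy data: reconstruct the speech envelope from time-lagged neural samples via ridge regression, then score decoding accuracy as the correlation between reconstructed and true envelopes. This stdlib-only sketch is an assumption-laden simplification (single channel, white-noise "envelope", hand-rolled solver), not the authors' multichannel, cross-validated pipeline.

```python
# Toy backward decoding model: ridge-regress the stimulus envelope onto
# time-lagged "EEG" samples and measure reconstruction accuracy.
import math
import random

random.seed(0)
T, LAGS, LAMBDA = 400, 4, 1e-2

envelope = [random.gauss(0, 1) for _ in range(T)]
# Fake neural signal: the envelope delayed by 2 samples, plus noise.
eeg = [envelope[max(t - 2, 0)] + 0.5 * random.gauss(0, 1) for t in range(T)]

# Backward model: reconstruct envelope[t] from eeg[t .. t+LAGS-1].
X = [[eeg[t + k] if t + k < T else 0.0 for k in range(LAGS)] for t in range(T)]

# Ridge regression, w = (X'X + lambda*I)^-1 X'y, via Gaussian elimination.
n = LAGS
XtX = [[sum(X[t][i] * X[t][j] for t in range(T)) + (LAMBDA if i == j else 0.0)
        for j in range(n)] for i in range(n)]
Xty = [sum(X[t][i] * envelope[t] for t in range(T)) for i in range(n)]
aug = [XtX[i] + [Xty[i]] for i in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        f = aug[j][i] / aug[i][i]
        aug[j] = [a - f * b for a, b in zip(aug[j], aug[i])]
w = [0.0] * n
for i in reversed(range(n)):
    w[i] = (aug[i][n] - sum(aug[i][j] * w[j] for j in range(i + 1, n))) / aug[i][i]

recon = [sum(w[k] * X[t][k] for k in range(n)) for t in range(T)]

def corr(a, b):
    """Pearson correlation: the usual decoding-accuracy score."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a)
                           * sum((y - mb) ** 2 for y in b))

print(corr(recon, envelope))  # decoding accuracy on the toy data
```

Group differences like those reported here would correspond to systematically lower `corr(recon, envelope)` scores in one group.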
King, C. D.; Zhu, T.; Groh, J. M.
Information about eye movements is necessary for linking auditory and visual information across space. Recent work has suggested that such signals are incorporated into processing at the level of the ear itself (Gruters, Murphy et al. 2018). Here we report confirmation that the eye movement signals that reach the ear can produce perceptual consequences, via a case report of an unusual participant with tensor tympani myoclonus who hears sounds when she moves her eyes. The sounds she hears could be recorded with a microphone in the ear in which she hears them (left), and occurred for large leftward eye movements to extreme orbital positions of the eyes. The sounds elicited by this participant's eye movements were reminiscent of eye movement-related eardrum oscillations (EMREOs; Gruters, Murphy et al. 2018, Brohl and Kayser 2023, King, Lovich et al. 2023, Lovich, King et al. 2023, Lovich, King et al. 2023, Abbasi, King et al. 2025, Sotero Silva, Kayser et al. 2025, King and Groh 2026, Leon, Ramos et al. 2026, Sotero Silva, Brohl et al. 2026), but were larger and longer lasting than classical EMREOs, helping to explain why they were audible to her. Overall, the observations from this patient help establish that (a) eye movement-related signals specifically reach the tensor tympani muscle and that (b) when there is an abnormality involving that muscle, such signals can lead to actual audible percepts. Given that the tensor tympani contributes to the regulation of sound transmission in the middle ear, these findings support the conclusion that eye movement signals reaching the ear have functional consequences for auditory perception. The findings also expand the types of medical conditions that produce gaze-evoked tinnitus, to date most commonly observed in connection with acoustic neuromas.
Micheal, A. S. M.; Bandyopadhay, S.
Auditory neurons, in the midbrain and beyond, detect changes in repeating acoustic patterns. Most studies focus on mechanisms underlying such sensitivity and on adaptation to regularity. However, regular sound patterns are crucial for social communication, stream segregation, and grouping in different species, including humans. We therefore addressed cortical selectivity to periodic or aperiodic sound sequences with multiple stimulus attributes. With single-unit electrophysiology and two-photon calcium imaging in anesthetized and awake mouse auditory cortex, we observe subpopulations of neurons selective to periodicity or aperiodicity that lack generalization across period length, frequency content, or inter-token-interval durations. Comparing results with and without inter-token-interval spiking activity shows its profound role in the selectivity to periodic or aperiodic sequences. The whole-population average rate is identical for periodic and aperiodic stimuli overall, but not following each period or during the stimulus-off period. Hence, inter-token-interval activity after each period increases during the sequence, providing information on selectivity and a prediction-like signal.
Highlights:
- Contrary to the common view, one subpopulation of neurons in the auditory cortex is selective to periodic sound sequences and another to aperiodic ones.
- Neither population generalizes its selectivity across properties of the tokens of the sequences.
- Neural activity during the inter-token interval plays an important role in enhancing the observed selectivity.
- Post-period (pre-subsequent-period) activity builds up during the stimulus, providing a prediction-like signal for periodic sequences relative to aperiodic sequences.
Calemi, C.; Bruffaerts, R.; Ellender, T. J.
This systematic review examines the effects of neurodegeneration in rats and mice on ultrasonic vocalisation (USV) production and its underlying neuronal substrates. Neurodegenerative diseases, such as Parkinson's disease, Alzheimer's disease, and frontotemporal degeneration, significantly impair communication abilities in humans. Animal models, particularly rats and mice, are widely used to study the underlying mechanisms of these disorders. One important aspect of neurodegeneration is its impact on ultrasonic vocalisations in rodents. USVs play a crucial role in social interaction, mating, and distress signalling, making them valuable behavioural biomarkers for neurological dysfunction. This review aims to synthesise existing research on how neurodegeneration affects USV production and its neuronal substrates in rodent models. Understanding these changes can provide insights into disease progression and facilitate the development of early diagnostic tools and therapeutic strategies. Studying USV impairments in animal models may help identify biomarkers relevant to human speech deficits in neurodegenerative diseases. By bridging the gap between preclinical and clinical research, this review contributes to the growing field of neurobehavioural biomarkers, which could ultimately improve early diagnosis and intervention in human neurodegenerative conditions.
Wouters, M.; Gaudrain, E.; Dapper, K.; Schirmer, J.; Baskent, D.; Ruettiger, L.; Knipper, M.; Verhulst, S.
Speech perception difficulties in noise are common among older adults and individuals with hearing impairment, even when audiometric thresholds appear normal. We examined how aging, cochlear synaptopathy (CS), and outer hair cell (OHC) damage affect speech encoding and phoneme discrimination. Envelope-following responses (EFRs) to rectangular amplitude-modulated (RAM) tones and speech-like phoneme pairs were recorded in quiet using EEG, and behavioral discrimination was assessed in quiet, ipsilateral, and contralateral noise. Stimuli were designed to target temporal envelope (TENV) or temporal fine structure (TFS) encoding. Results showed that RAM-EFR amplitudes decreased gradually with age, consistent with emerging CS, while magnitudes of high-frequency TENV-based EFRs in quiet were most reduced in older hearing-impaired listeners with combined CS and OHC damage. In contrast, EFRs targeting low-frequency TENV encoding in quiet remained preserved. Behaviorally, phoneme discrimination of TFS contrasts worsened with OHC loss and age in quiet and contralateral noise, respectively, while there was no significant effect of age on the discrimination of TENV contrasts. Considering that high-frequency contrasts are discriminated via place-based spectral cues, low-frequency contrasts rely on TFS, and the EFR reflects primarily TENV, this framework explains why EFRs decline for high-frequency cues without perceptual loss, while EFRs remain stable for low-frequency cues even as TFS-based discrimination deteriorates. These findings highlight the need for further investigation into how neural coding deficits relate to perceptual outcomes. Combining electrophysiological and behavioral measures might provide a sensitive framework for detecting subclinical auditory deficits to diagnose age-related and hidden hearing loss earlier.
Highlights:
- Speech-evoked EEG shows OHC loss-related decline of high-CF envelope encoding.
- Speech-evoked EEG shows low-CF envelope encoding stays intact with age.
- Fine-structure contrast discrimination worsens with OHC loss in quiet.
- Fine-structure contrast discrimination worsens with age in contralateral noise.
- High-frequency place-based spectral cue discrimination remains robust with age.
- Peripheral coding strength is not directly reflected at the behavioral level.
Palou, A.; Tagliabue, M.; Beraneck, M.; Llorens, J.
The rat vestibular system plays a critical role in anti-gravity responses such as the tail-lift reflex and the air-righting reflex. In a previous study in male rats, we obtained evidence that these two reflexes depend on the function of non-identical populations of vestibular sensory hair cells (HC). Here, we caused graded lesions in the vestibular system of female rats by exposing the animals to several different doses of an ototoxic chemical, 3,3'-iminodipropionitrile (IDPN). After exposure, we assessed the anti-gravity responses of the rats and then assessed the loss of type I HC (HCI) and type II HC (HCII) in the central and peripheral regions of the crista, utricle, and saccule. As expected, we recorded a dose-dependent loss of vestibular function and of HCs. The relationship between hair cell loss and functional loss was examined using non-linear models fitted by orthogonal distance regression. The results indicated that both the tail-lift and air-righting reflexes depend mostly on HCI function. However, the two reflexes differed in the epithelium on which they depend: while the tail-lift response is sensitive to loss of crista and/or utricle HCIs, the air-righting response depends rather on utricular and/or saccular integrity.
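Orthogonal distance regression, the fitting method named above, minimizes perpendicular rather than vertical distances to the fitted curve, which is appropriate when both variables (here, hair cell counts and reflex scores) carry measurement error. A closed-form linear special case (Deming regression with equal error variances) illustrates the idea; the study itself fits non-linear models, and the data below are made up.

```python
# Deming regression: the linear, equal-error-variance special case of
# orthogonal distance regression. Toy data, not the study's measurements.
import math

def deming(xs, ys):
    """Slope and intercept of the orthogonal least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / n
    syy = sum((y - my) ** 2 for y in ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    slope = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

# Hypothetical example: surviving HCI fraction vs. normalized reflex score.
hci = [0.1, 0.3, 0.5, 0.7, 0.9]
reflex = [0.15, 0.42, 0.55, 0.78, 0.88]
slope, intercept = deming(hci, reflex)
print(slope, intercept)
```

For the non-linear models of the study, the same perpendicular-distance criterion is minimized numerically (e.g. with dedicated ODR software) rather than in closed form.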
Cederroth, C. R.
Age-related hearing loss is the leading sensory deficit in older adults, yet audiometric thresholds at conventional frequencies often poorly predict speech understanding. Two competing hypotheses have emerged: extended high-frequency (eHF) hearing loss beyond 8 kHz may unmask variance in speech performance, while hidden hearing loss from cochlear synaptopathy, detectable via auditory brainstem response (ABR) wave I amplitude reduction, may degrade temporal coding independent of audiometry. Here, in 526 ears from 263 tinnitus-free adults in the Swedish Tinnitus Outreach Project (STOP) cohort, we show that the eHF pure-tone average (10-16 kHz) is the single most age-sensitive auditory measure, explaining 64% of age-related variance (R^2 = 0.64) compared to only 16% for conventional audiometry (R^2 = 0.16). Moreover, eHF thresholds robustly predict both word and phoneme recognition in speech-weighted noise (+4 dB SNR), explaining 34-36% of speech variance (R^2 = 0.34-0.36), substantially exceeding the conventional pure-tone average (22-25%) and all ABR features (5-13%). In contrast, ABR wave I amplitude, the putative marker of cochlear synaptopathy, contributes no additional explanatory power even in high-reliability recordings (ICC = 0.96). These findings challenge the translational relevance of cochlear synaptopathy to age-related speech deficits and suggest conduction delays, not synaptic loss, as the peripheral neural mechanism underlying speech comprehension decline in aging.
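Operationally, "explains 64% of age-related variance (R^2 = 0.64)" means that, for a simple linear regression, the squared Pearson correlation between predictor (eHF pure-tone average) and outcome (age) is 0.64. A minimal sketch with made-up numbers, not the STOP data:

```python
# R^2 for simple linear regression equals the squared Pearson correlation
# between predictor and outcome. Toy, hypothetical values only.

def r_squared(xs, ys):
    """Squared Pearson correlation between two equally long sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov * cov / (vx * vy)

age = [30, 40, 50, 60, 70, 80]
ehf_pta = [12, 18, 31, 45, 52, 71]  # hypothetical eHF thresholds (dB HL)
print(r_squared(age, ehf_pta))      # fraction of age variance "explained"
```

Comparing such R^2 values across predictors (eHF PTA vs. conventional PTA vs. ABR features) is the logic behind the variance-explained contrasts in the abstract.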
Andrew, J. R.; Dean, E.; Thomas, A.; Plack, C. J.; Gaffney, C. J.; Nuttall, H. E.
Repetitive sub-concussive head impacts are emerging as one of the most urgent and overlooked challenges in neurotrauma. Despite growing evidence of their neurological consequences, there is no validated objective biomarker for early and reliable detection. Cortical (N100) and subcortical (frequency-following) responses to a speech syllable presented in (1) quiet and (2) six-talker background noise listening conditions were assessed using EEG in 60 tier-2 athletes (30 contact, 30 non-contact; age-, sex-, height-, body mass- and BMI-matched). Reduced cortical N100 amplitudes in contact athletes were confirmed by a significant group effect (F(1,54) = 9.16, p = .004), indicating early auditory cortical dysfunction as a measurable biomarker of sub-concussive exposure. Contact athletes also exhibited subtle hearing deficits and impaired self-reported speech perception, linking neural changes to real-world communication deficits. These peripheral and perceptual findings were not related to cortical response amplitudes, suggesting that peripheral and cortical changes may occur independently following repetitive head impacts. Response timing and subcortical encoding were unaffected under both listening conditions. Our findings establish a selective auditory cortical vulnerability to repeated sub-concussive head impact exposure, providing the basis of an objective EEG-based monitoring tool that could help support athlete brain health and safety, and inform future research in contact sports.
Wright, S.; Banks, M. I.; Raz, A.
Objective: To test the effect of isoflurane on synaptic transmission of cortico-cortical and thalamocortical projections to the auditory cortex, and to investigate how it modulates cortical sensory information processing to produce unconsciousness. Methods: Using murine auditory thalamocortical brain slices, afferent pathways from the medial geniculate body (MGB) and layer 1 of the proximal cortex were stimulated to evoke excitatory postsynaptic potentials (eEPSPs) in cortical neurons. Whole-cell recordings were made from pyramidal and fast-spiking neurons in layer 2/3 and layer 5. eEPSPs were evaluated along with intrinsic membrane properties in response to stimulation of both pathways, with and without isoflurane. Results: Isoflurane administration resulted in significant eEPSP amplitude reduction following stimulation of both thalamic and cortical pathways in layer 2/3 (p=0.015, p<0.001) and layer 5 (p<0.001, p<0.001) pyramidal neurons, while it significantly reduced eEPSP amplitude in fast-spiking interneurons only with cortical stimulation (p<0.001). Overall, isoflurane preferentially suppressed synaptic responses to cortico-cortical stimulation compared to thalamocortical stimulation (p=0.0002). Under isoflurane, cortico-cortical stimulation, compared to thalamocortical stimulation, evoked eEPSPs with reduced 10-90% rise time in both layer 2/3 and layer 5 pyramidal neurons, and with shorter latency in layer 5 neurons. The paired-pulse ratio was not changed by isoflurane application, although a trend toward loss of paired-pulse depression appeared in layer 5 pyramidal neurons stimulated by cortical activation. Additional intrinsic neuronal measurements revealed that isoflurane significantly reduced spike threshold in both layer 2/3 and layer 5 neurons, reduced spike latency in layer 2/3 neurons, and reduced input resistance in layer 5 neurons. These intrinsic changes were not seen in fast-spiking interneurons. All isoflurane-induced changes were reversible during washout.
Conclusions: Application of 1% isoflurane to brain slices significantly reduced the amplitudes of eEPSPs and modulated intrinsic neuronal properties. The effects on eEPSP amplitude were greater for cortical stimulation than for thalamic stimulation. Isoflurane modulated intrinsic firing properties in pyramidal neurons, but not in fast-spiking interneurons.
Westner, B. U.; Luo, Y.; Piai, V.
Both episodic memory and word retrieval have been linked to power decreases in the alpha and beta oscillatory bands, but these patterns have rarely been related to each other, partly due to a lack of available methodological approaches. In this explorative study, we investigate the similarities and dissimilarities in the oscillatory fingerprints of the retrieval of words and episodes by directly comparing the activity patterns across time, frequency, and space. We acquired electroencephalography (EEG) data from participants performing a language task and an episodic memory task based on the same stimulus material. With a newly developed approach, we directly compared the source-reconstructed oscillatory activity using mutual information and a feature-impact analysis. While left temporal and frontal regions showed dissimilarities between the tasks, right-hemispheric parietal regions exhibited similarities. We speculate that this could indicate a homologous function of these regions, potentially sharing less-specific representations between the tasks. We further uncovered a dissociation between the alpha and beta bands regarding similarity across tasks. While the beta band was dissimilar between word and episodic memory retrieval, the alpha band seemed to contribute to the similarity we observed in right parietal regions. Whether this points to a task-unspecific function of the alpha band or a functional role in the retrieval of the presumed representations remains to be determined. In summary, we present an approach to study similarity across tasks using the temporal, spectral, and spatial dimensions of EEG data, and report the shared oscillatory fingerprints of episodic memory and word retrieval.
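The similarity measure at the heart of such a cross-task comparison, mutual information, can be illustrated with a minimal histogram-based estimator on discretized sequences. This is a generic sketch of the measure, not the authors' source-level pipeline, and the sequences below are toy data.

```python
# Minimal histogram-based mutual information (in bits) between two
# equally long discrete sequences. Toy data only.
import math
from collections import Counter

def mutual_information(xs, ys):
    """MI estimated from the joint and marginal empirical distributions."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

pattern_a = [0, 1, 0, 1, 1, 0, 0, 1]   # "activity pattern" in task A
one_flip = [0, 1, 0, 1, 0, 0, 0, 1]    # same pattern with one sample flipped

print(mutual_information(pattern_a, pattern_a))  # 1.0 bit: identical balanced binary patterns
print(mutual_information(pattern_a, one_flip))   # lower: partial similarity
```

High MI between the source-reconstructed activity of the two tasks in a region is what the study interprets as cross-task similarity (e.g. in right parietal regions).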
Eccher, E.; Salva, O. R.; Chiandetti, C.; Vallortigara, G.
Numerical abilities are widespread in the animal kingdom and are not exclusive to humans. Domestic chicks (Gallus gallus) have been shown to discriminate numerosities spontaneously, but prior research has focused exclusively on the visual modality. Whether chicks can discriminate numerical information in the auditory domain remains unknown, despite evidence that they can perceive other auditory features such as tone and rhythm. In this study, we investigated spontaneous numerical discrimination in the auditory modality in naive domestic chicks. In Experiment 1, newly hatched chicks were tested for their ability to discriminate between two auditory sequences differing in numerosity (4 vs. 12 identical sounds), with and without controlling for continuous variables such as duration and total sound amount. Experiment 2 examined chicks' filial imprinting responses to familiar or unfamiliar numerosities. Experiment 3 controlled for potential spontaneous preferences for a single longer sound versus a shorter one. Our results showed a preference for the 12-sound sequence only when duration and total sound amount were not matched. When these continuous variables were controlled, no spontaneous numerical preference emerged. Experiment 2 revealed an overall preference for the 12-sound sequence regardless of imprinting conditions, while Experiment 3 confirmed that chicks do not have an inherent preference for longer sounds. These findings suggest that chicks are sensitive to overall magnitude in the auditory domain but do not spontaneously discriminate numerical differences when other continuous variables are held constant. Future studies will explore how specific stimulus features, such as heterogeneity of sounds, influence these preferences.
Leblond, S.; Baures, R.; Atger, T.; Poinsignon, M.; Cappe, C.; Roux, F.-E.
Background: Audiovisual integration is essential for daily functions such as speech comprehension. It relies on a temporal constraint whereby events from different sensory modalities are perceptually bound within a limited temporal window, the audiovisual temporal binding window, defining the range of stimulus onset asynchronies perceived as synchronous. While correlational neuroimaging studies (fMRI, EEG) have implicated a distributed network in audiovisual integration, the causal neural underpinnings of the temporal binding window remain largely unknown. Objective: To identify cortical regions causally supporting audiovisual simultaneity judgment. Methods: Direct electrical stimulation (DES) was prospectively applied to 62 cortical sites during awake brain surgery in 39 patients. Patients performed an audiovisual simultaneity judgment task with varying stimulus onset asynchronies alongside standard sensory-motor, language, and visuospatial tasks. Montreal Neurological Institute coordinates were obtained for all stimulated areas. Results: DES selectively impaired audiovisual simultaneity judgments, while sparing other standard tasks, at 7 highly focal right-hemispheric cortical sites (<1 cm2). Three sites were situated around the intraparietal sulcus, and four near the supplementary motor area. Stimulation of left-hemisphere sites produced non-selective impairments, also affecting language-related tasks. Conclusions: These findings provide causal evidence for a right-lateralized frontoparietal network, involving focal regions near the intraparietal sulcus and supplementary motor area, in audiovisual temporal integration. Given the established roles of these regions in attentional and decisional processes, this study refines their contribution to the temporal binding window network and underscores the clinical importance of preserving this network during awake brain surgery.
Maeda, H.; Wang, S.; Funamizu, A.
Show abstract
Animals and humans use multiple behavioral strategies to perform tasks. However, neural implementations of multiple strategies remain elusive, as some studies propose distinct pathways, while others observe overlapping brain regions associated with strategies. We propose a hybrid deep reinforcement learning (H-DRL) method, in which one network model implements model-free and inference-based behaviors through synaptic plasticity and recurrent activity. H-DRL uses a single updating rule and switches the strategy according to task demands without an explicit arbitrator. H-DRL reproduced mixed strategies of humans in a two-step task. In the mouse perceptual decision-making task, H-DRL adapted the recurrent dynamics with rich learning when the task condition required inference-based behavior, while adopting model-free behavior with lazy learning for a simple condition. The activity of H-DRL units showed condition-dependent maintenance of previous events, consistent with orbitofrontal cortical activity in mice. Our model provides a unified view that one cortical network automatically determines strategies in use depending on task conditions.
Augsten, M.-L.; Lindenbeck, M. J.; Laback, B.
Show abstract
Cochlear implant (CI) users typically experience difficulties perceiving musical harmony due to a restricted spectro-temporal resolution at the electrode-nerve interface, resulting in limited pitch perception. We investigated how stimulus parameters affect discrimination of complex-tone triads (three-voice chords), aiming to identify conditions that maximize perceptual sensitivity. Six post-lingually deafened CI listeners completed a same/different task with harmonic complex tones, while spectral complexity, voice(s) containing a pitch change, and temporal synchrony (simultaneous vs. sequential triad presentation) were manipulated. CI listeners discriminated harmonically relevant one-semitone pitch changes within triads when spectral complexity was reduced to three or five components per voice, with significantly better performance for three-component compared to nine-component tones. Sensitivity was observed for pitch changes in the high voice or in both high and low voices, but not for changes in only the low voice. Single-voice sensitivity predicted simultaneous-triad sensitivity when controlling for spectral complexity and voice with pitch change. Contrary to expectations, sequential triad presentation did not improve discrimination. An analysis of processor pulse patterns suggests that difference-frequency cues encoded in the temporal envelope rather than place-of-excitation cues underlie perceptual triad sensitivity. These findings support reducing spectral complexity to enhance chord discrimination for CI users based on temporal cues.
Bjekic, J.; Zivanovic, M.; Miniussi, C.; Filipovic, S.
Show abstract
Transcranial electrical stimulation (tES) can modulate neural dynamics, yet its effects on memory are heterogeneous. Individual differences in cognitive profiles may be one cause, setting boundary conditions on the extent and mode of tES-induced modulation of network dynamics. In a sham-controlled, within-subject study (N = 42), we compared the effects of tDCS (1.5 mA), tACS (±1.0 mA at individual theta frequency, ITF), and otDCS (1.5 mA ± 0.5 mA at ITF) over the left posterior parietal cortex on object-location (OL) associative memory, and examined whether six cognitive abilities (figural reasoning, semantic, visuospatial, processing speed, working memory, mnemonic binding) moderate stimulation outcomes. Associative memory recognition improved selectively under theta-otDCS, whereas tDCS and tACS showed no significant group-level effects. Yet all tES protocols exhibited considerable interindividual variability. Relative to cognitive abilities, processing speed moderated tES effects in line with neural efficiency predictions, yielding greater gains in cognitively faster individuals. In contrast, mnemonic binding and figural reasoning moderated benefits in a compensatory manner, with larger improvements in lower-ability individuals. Overall, the effects of tES on associative memory were specific to the tES protocol and outcome measure while being strongly shaped by cognitive profile via complementary magnification and compensation mechanisms.